[Core] enable lora for sdxl adapters too and add slow tests. #5555
Conversation
LoRA-related additions look good to me! Are there any cool results that you would like to share with us here? For the tests, I will defer to @DN6.
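For context, the usage this PR enables looks roughly like the following minimal sketch. The checkpoint IDs are placeholders, and the LoRA repo id is hypothetical; any SDXL base + T2I adapter + SDXL LoRA combination should exercise the same code path:

```python
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter

# Placeholder checkpoints, chosen only to illustrate the code path.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-lineart-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# The call under discussion: loading LoRA weights into the adapter pipeline.
pipe.load_lora_weights("some-user/some-sdxl-lora")  # hypothetical LoRA repo id
```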
LGTM 👍🏽
Thanks, great. Do you know when it will be merged?
The documentation is not available anymore as the PR was closed or merged.
@ilisparrow I think you would need to run …
Great job @ilisparrow!
Actually, this PR was incorrect: the SDXL adapter pipeline already had the LoRA load mixin, and the tests had errors. Reverting this PR here: #5555
Hello, I might be missing something, but it fixed a real problem, and it was reproduced by Sayakpaul.
That might have been the case because I reproduced it with the stable release of diffusers. Installing from main might have actually solved it (see `src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py`, line 127 at commit 75ea54a); one quick way to check is shown below.
I apologize for the oversight on my part.
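A minimal inspection snippet for checking whether a given diffusers install already wires the LoRA loader mixin into the adapter pipeline (the mixin's exact name varies across versions, so this matches loosely):

```python
from diffusers import StableDiffusionXLAdapterPipeline

# Print every base class with "Lora" in its name; if a LoRA loader
# mixin is already wired in, it will show up here.
print([cls.__name__ for cls in StableDiffusionXLAdapterPipeline.__mro__
       if "Lora" in cls.__name__])
```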
[Core] enable lora for sdxl adapters too and add slow tests. (huggingface#5555)

* Enable lora for sdxl adapters too. Issue huggingface#5516
* fix: assertion values.
* Use numpy_cosine_similarity_distance on the arrays
* Changed imports orders to pass tests

Co-authored-by: Ilias A <iliasamri00@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
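On the test strategy mentioned in the commit message: `numpy_cosine_similarity_distance` compares flattened arrays by cosine distance, which tolerates tiny per-pixel drift better than exact-equality checks. A minimal sketch of the assertion pattern, assuming the helper exported by `diffusers.utils.testing_utils` and using made-up slice values purely for illustration:

```python
import numpy as np
from diffusers.utils.testing_utils import numpy_cosine_similarity_distance

# Made-up values, only to illustrate the assertion pattern.
expected_slice = np.array([0.4468, 0.4087, 0.4134, 0.3637, 0.3202, 0.3652, 0.3055, 0.3008, 0.3214])
image_slice = np.array([0.4469, 0.4085, 0.4135, 0.3638, 0.3200, 0.3651, 0.3057, 0.3010, 0.3212])

# Slow tests assert the distance stays below a small tolerance.
max_diff = numpy_cosine_similarity_distance(expected_slice.flatten(), image_slice.flatten())
assert max_diff < 1e-4
```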
Fixes #5516
Additionally, adds slow tests for the SDXL T2I adapters.
Hello,
As requested, here is a PR to fix #5516, with its corresponding tests.
I have a few questions:

1. I had to add `torch_dtype=torch.float16`, but your PR that inspired mine (https://github.com/huggingface/diffusers/pull/4666/files) didn't have it. This is the error that I get with it: `RuntimeError: Input type (float) and bias type (c10::Half) should be the same`
2. When I add `.to("cpu")` to the pipeline and the adapter, I get: `Pipelines loaded with 'torch_dtype=torch.float16' cannot run with 'cpu' device. It is not recommended to move them to 'cpu' as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for 'float16' operations on this device in PyTorch. Please, remove the 'torch_dtype=torch.float16' argument, or use another device for inference.`

(One way to pair dtype and device consistently is sketched after this list.)
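A minimal sketch of one possible fix, not necessarily the one used in this PR: pick float16 only when an accelerator is available, which avoids both messages quoted above. The checkpoint IDs are placeholders.

```python
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter

# Use float16 only on an accelerator; fall back to float32 on CPU,
# since PyTorch lacks support for many float16 ops on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Placeholder checkpoint IDs, for illustration only.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-lineart-sdxl-1.0", torch_dtype=dtype
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=dtype
).to(device)

# The "Input type (float) and bias type (c10::Half)" error usually means an
# input tensor stayed in float32 while the weights are half precision, so any
# conditioning tensors should be cast to the same dtype as the pipeline.
```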
I hope this helps.
@sayakpaul